FO (complexity)

FO is the complexity class of structures that can be recognised by formulae of first-order logic. It is the foundation of the field of descriptive complexity and is equal to the circuit complexity class AC0. Various extensions of FO, formed by the addition of certain operators, give rise to other well-known complexity classes,[1] allowing the complexity of some problems to be proven without having to go down to the algorithmic level.

Definition and examples

The idea

When we use the logic formalism to describe a computational problem, the input is a finite structure, and the elements of that structure are the domain of discourse. Usually the input is either a string (of bits or over an alphabet) whose elements are positions of the string, or a graph whose elements are vertices. The length of the input is measured by the size of the respective structure. Whatever the structure is, we can assume that there are relations that can be tested, for example "E(x,y) is true iff there is an edge from x to y" (in the case where the structure is a graph), or "P(n) is true iff the nth letter of the string is 1". These relations are the predicates for the first-order logic system. We also have constants, which are special elements of the respective structure; for example, if we want to check reachability in a graph, we have to choose two constants s (start) and t (terminal).

In descriptive complexity theory we almost always suppose that there is a total order over the elements and that we can check equality between elements. This lets us consider elements as numbers: the element x represents the number n iff there are (n-1) elements y with y<x. Thanks to this we may also want the primitive "bit", where bit(x,k) is true if and only if the kth bit of x is 1. (We can replace addition and multiplication by ternary relations such that plus(x,y,z) is true iff x+y=z and times(x,y,z) is true iff x*y=z.)
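
Once elements of an ordered n-element structure are identified with the numbers 0..n-1, these arithmetic predicates become ordinary numeric checks. A minimal illustrative sketch (not part of the formal definition):

```python
# Illustrative sketch: arithmetic predicates over an ordered structure,
# with elements identified with the numbers 0..n-1.

def bit(x: int, k: int) -> bool:
    """bit(x, k) is true iff the k-th bit of x is 1."""
    return (x >> k) & 1 == 1

def plus(x: int, y: int, z: int) -> bool:
    """plus(x, y, z) is true iff x + y = z, as a ternary relation."""
    return x + y == z

def times(x: int, y: int, z: int) -> bool:
    """times(x, y, z) is true iff x * y = z, as a ternary relation."""
    return x * y == z
```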

Formally

The language FO is then defined as the closure of the atomic formulae under conjunction (\wedge), negation (\neg) and universal quantification (\forall) over elements of the structures. We also often use existential quantification (\exists) and disjunction (\vee), but those can be defined by means of the first three symbols.

The semantics of the formulae in FO is straightforward: \neg A is true iff A is false, A\wedge B is true iff A is true and B is true, and \forall x P(x) is true iff P(v) is true for all values v that x may take in the underlying universe.

Property

Justification

Since in a computer elements are just pointers, i.e. strings of bits, the assumption in descriptive complexity that we have an order over the elements of the structures makes sense. For the same reason we often suppose either a BIT predicate or + and \times, since those primitive functions can be computed in most of the small complexity classes.

FO without those primitives is studied more in finite model theory, and it is equivalent to smaller complexity classes; those classes are the ones decided by relational machines.

Warning

A query in FO is then to check whether a first-order formula is true over a given structure representing the input to the problem. One should not confuse this kind of problem with checking whether a quantified Boolean formula is true, which is the definition of QBF and is PSPACE-complete. The difference between the two problems is that in QBF the size of the problem is the size of the formula and the elements are just Boolean values, whereas in FO the size of the problem is the size of the structure and the formula is fixed.

This is reminiscent of parameterized complexity, but here the size of the formula is not a fixed parameter.
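
The distinction can be made concrete with a small sketch: the formula is fixed in advance, and only the structure varies. The example formula here is hypothetical, not from the article: \exists x \forall y (x = y \vee E(x,y)), "some vertex has an edge to every other vertex".

```python
# Sketch: evaluating one fixed first-order formula over an input
# structure (a graph, given as its vertex set and edge relation).
# The formula is a hypothetical example:
#   exists x . forall y . (x = y  or  E(x, y))

def has_universal_vertex(vertices, edges):
    E = set(edges)  # the edge relation E(x, y)
    return any(all(x == y or (x, y) in E
                   for y in vertices)
               for x in vertices)
```

The size of the problem is the size of the input graph; the quantifiers range over its vertices, not over Boolean values as in QBF.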

Normal form

Every formula is equivalent to a formula in prenex normal form (where all quantifiers are written first, followed by a quantifier-free formula).

Operators

FO without any operators

In circuit complexity, FO can be shown to be equal to AC0, the first class in the AC hierarchy. Indeed, there is a natural translation from FO's symbols to nodes of circuits, with \forall and \exists becoming \land and \lor gates of fan-in n.

Partial fixed point is PSPACE

FO(PFP) is the set of boolean queries definable in FO where we add a partial fixed point operator.

Let k be an integer, x, y be vectors of k variables, P be a second-order variable of arity k, and \phi be an FO(PFP) formula using x and P as variables. We can iteratively define (P_i)_{i\in N} such that P_0(x)=false and P_i(x)=\phi(P_{i-1},x) (meaning \phi with P_{i-1} substituted for the second-order variable P). Then either there is a fixed point, or the sequence of (P_i)s becomes cyclic.

PFP(\phi_{P,x})(y) is defined as the value of the fixed point of (P_i) on y if there is a fixed point, and as false otherwise. Since the P_i are relations of arity k over an n-element domain, there are at most 2^{n^k} possible values for the P_is, so with a polynomial-space counter we can check whether there is a loop.

It has been proven that FO(PFP) is equal to PSPACE. This definition is equivalent to FO(2^{n^{O(1)}}).
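
The iteration just described can be sketched directly (an illustrative model of the semantics, not an efficient algorithm; it stores each stage, whereas the polynomial-space argument only counts iterations):

```python
# Sketch of the PFP semantics over a finite structure: iterate
# P_{i+1} = phi(P_i) starting from the empty relation.  If the
# sequence cycles without reaching a fixed point, the result is
# "false everywhere", modelled here as the empty relation.

def pfp(phi):
    seen = set()
    P = frozenset()
    while P not in seen:
        seen.add(P)
        Q = frozenset(phi(P))
        if Q == P:          # fixed point reached
            return P
        P = Q
    return frozenset()      # cycle: PFP is defined as false
```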

Least Fixed Point is P

FO(LFP) is the set of boolean queries definable in FO(PFP) where the partial fixed point is limited to be monotone. That is, if the second-order variable is P, then P_i(x) always implies P_{i+1}(x).

We can guarantee monotonicity by restricting the formula \phi to only contain positive occurrences of P (that is, occurrences preceded by an even number of negations). We can alternatively describe LFP(\phi_{P,x}) as PFP(\psi_{P,x}) where \psi(P,x)=\phi(P,x)\vee P(x).

Due to monotonicity, we only ever add vectors to the truth table of P, and since there are only n^k possible vectors we always reach a fixed point within n^k iterations. Hence it can be shown that FO(LFP)=P. This definition is equivalent to FO(n^{O(1)}).
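
A sketch of such a monotone iteration, on a hypothetical example query (reachability from a source s, with operator P_{i+1}(y) = (y = s) \vee \exists x (P_i(x) \wedge E(x,y))):

```python
# Sketch of LFP evaluation on a monotone operator: reachability
# from source s.  P only grows, so the loop runs at most
# |vertices| times before reaching the least fixed point.

def lfp_reachable(vertices, edges, s):
    E = set(edges)
    P = set()
    while True:
        Q = {y for y in vertices
             if y == s or any((x, y) in E for x in P)}
        if Q == P:
            return P
        P = Q
```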

Transitive closure is NL

FO(TC) is the set of boolean queries definable in FO with a transitive closure (TC) operator.

TC is defined this way: let k be a positive integer and u,v,x,y be vectors of k variables. Then TC(\phi_{u,v})(x,y) is true if there exist n vectors of variables (z_i) such that z_1=x, z_n=y, and for all i<n, \phi(z_i,z_{i+1}) is true. Here, \phi is a formula written in FO(TC) and \phi(x,y) means that the variables u and v are replaced by x and y.

This class is equal to NL.
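
The semantics of TC can be sketched as a plain search over the step relation (illustrative only; this uses linear space, whereas NL captures the logarithmic-space evaluation):

```python
# Sketch of TC(phi)(x, y): is there a phi-path z_1 = x, ..., z_n = y?
# Note n = 1 is allowed, so TC(phi)(x, x) always holds.

def tc(step, domain, x, y):
    seen, frontier = {x}, [x]
    while frontier:
        u = frontier.pop()
        for v in domain:
            if step(u, v) and v not in seen:
                seen.add(v)
                frontier.append(v)
    return y in seen
```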

Deterministic transitive closure is L

FO(DTC) is defined as FO(TC) where the transitive closure operator is deterministic. This means that when we apply DTC(\phi_{u,v}), we know that for all u, there exists at most one v such that \phi(u,v).

We can suppose that DTC(\phi_{u,v}) is syntactic sugar for TC(\psi_{u,v}) where \psi(u,v)=\phi(u,v)\wedge \forall x\, (x=v \vee \neg \phi(u,x)).

It has been shown that this class is equal to L.
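
When the step relation is deterministic, no search is needed: the chain from x is unique and can simply be followed, which is the logspace flavour of DTC. An illustrative sketch:

```python
# Sketch of DTC(phi)(x, y): phi is deterministic, so each state has
# at most one successor.  Follow the unique chain from x for at most
# |domain| steps; if it cycles or dies out without hitting y, reject.

def dtc(step, domain, x, y):
    cur = x
    for _ in range(len(domain) + 1):
        if cur == y:
            return True
        succs = [v for v in domain if step(cur, v)]
        if not succs:
            return False
        cur = succs[0]   # unique by determinism
    return False         # chain cycled without reaching y
```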

Normal form

Any formula with a fixed point (resp. transitive closure) operator can, without loss of generality, be written with exactly one application of the operator, applied to 0 (resp. (0, n-1)).

Iterating

We will define first-order with iteration, 'FO[t(n)]'; here t(n) is a (class of) functions from integers to integers, and for different classes of functions t(n) we will obtain different complexity classes FO[t(n)].

In this section we will write (\forall x P) Q to mean (\forall x (P\Rightarrow Q)) and (\exists x P) Q to mean (\exists x (P \wedge Q)). We first need to define quantifier blocks (QB): a quantifier block is a list (Q_1 x_1, \phi_1)...(Q_k x_k, \phi_k) where the \phi_is are quantifier-free FO-formulae and the Q_is are either \forall or \exists. If Q is a quantifier block then we call [Q]^{t(n)} the iteration operator, which is defined as Q written t(n) times. One should note that there are then k*t(n) quantifiers in the list, but only k variables, and each of those variables is used t(n) times.

We can now define FO[t(n)] to be the FO-formulae with an iteration operator whose exponent is in the class t(n); as noted above, for example, FO[n^{O(1)}] is equal to P and FO[2^{n^{O(1)}}] is equal to PSPACE.

Logic without arithmetical relations

Let the successor relation, succ, be a binary relation such that \rm{succ}(x,y) is true if and only if x+1=y.

Over first-order logic, succ is strictly less expressive than <, which is less expressive than +, which is less expressive than bit. + and \times together are as expressive as bit.

Using successor to define bit

It is possible to define the plus and then the bit relations with a deterministic transitive closure.

\rm{plus}(a,b,c)=(\rm{DTC}_{v,x,y,z}\, \rm{succ}(v,y) \land \rm{succ}(z,x))(a,b,c,0) and

\rm{bit}(a,b)=(\rm{DTC}_{a,b,a',b'}\,\psi)(a,b,1,0) with

\psi=\text{if } b=0 \text{ then } (\text{if } \exists m\,(a=m+m+1) \text{ then } (a'=1\land b'=0) \text{ else } \bot) \text{ else } (\rm{succ}(b',b) \land (a'+a'=a \lor a'+a'+1=a))

This just means that when we query for bit 0 we check the parity and go to (1,0), which is an accepting state, if a is odd; otherwise we reject. If we check a bit b>0, we divide a by 2 and check bit b-1.
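
The deterministic walk described above can be sketched directly (illustrative only; the point of the formula is that this computation is expressible with DTC and succ alone):

```python
# Sketch of the deterministic walk: from state (a, b), step to
# (a // 2, b - 1) until b = 0, then accept iff a is odd, i.e. we
# can reach the accepting state (1, 0) via the parity check.

def bit_by_dtc(a, b):
    while b > 0:
        a, b = a // 2, b - 1   # divide a by 2, check bit b-1
    return a % 2 == 1          # bit 0 is the parity of a
```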

Hence, in the presence of such operators, successor alone already yields the other arithmetic predicates, so there is no point in considering successor without them.

Logics without successor

Over logics without succ, +, \times, < or bit, the equalities become: FO(LFP) is equal to relational-P and FO(PFP) to relational-PSPACE, the classes P and PSPACE over relational machines.[2]

The Abiteboul–Vianu theorem states that FO(LFP)=FO(PFP) if and only if FO(<,LFP)=FO(<,PFP), hence if and only if P=PSPACE. This result has been extended to other fixed points.[2] It shows that the order problem in first-order logic is more a technical than a fundamental issue.

References

  1. ^ N. Immerman. Descriptive Complexity. Springer, 1999.
  2. ^ a b Serge Abiteboul, Moshe Y. Vardi, Victor Vianu. Fixpoint logics, relational machines, and computational complexity. Journal of the ACM 44(1), January 1997, pp. 30–56. ISSN 0004-5411.
